8 May 1998


High-performance Computing, National Security Applications, and
Export Control Policy at the Close of the 20th Century


APPENDIX A. Applications of National Security Importance

The following table provides a summary of national security applications reviewed in this study. It indicates the kind of problem solved, the HPC configuration on which it was solved, and the time required for the solution.

This selection, compiled through a combination of direct communications with practitioners and a review of published literature, is not an exhaustive listing. However, it does include many of the more important national security applications, and gives policy makers a rough idea of the kinds of applications being solved at various performance levels.

Two points in particular should be kept in mind when reading the table. First, the applications shown here are data points that, in practice, often lie along a continuum. The specific size of a problem is frequently a function of the computational resources available in a given configuration and the "threshold of patience" of the practitioner. If the available configuration were slightly larger or smaller, practitioners would in most cases solve the same kinds of problems, but perhaps with a different grid size, time-step, and so on. In short, the absence of a particular type of application at some performance level should not be interpreted as a statement that no version of that application can be solved at that performance level.

Second, the CTP value shown is the composite theoretical performance of the configuration used to solve the problem. No metric, including the CTP, perfectly predicts the performance of all systems on all applications. Consequently, the CTP figures given here should be used only as rough indicators of the performance level required for a particular kind of application. The fact that a given problem was run on a machine with a CTP of n Mtops does not mean that every system with CTP > n Mtops can solve the problem, or that every system with CTP < n Mtops cannot. The CTP simply does not have that kind of precision.
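
To make the intended reading concrete, the short Python sketch below (an illustration added here, not part of the original study) treats a handful of rows hand-copied from the table as (category, CTP) data points and reports the lowest CTP at which each category appears in that sample. The entry numbers, machines, and CTP values are taken directly from the table; the output should be read as "a problem of this kind was solved at roughly this CTP," not as a threshold below which the application becomes infeasible.

    # Illustrative sketch: treat selected table rows as (category, CTP) data
    # points and report the lowest CTP (in Mtops) at which each application
    # category appears in this small, hand-copied sample.
    from collections import defaultdict

    sample_rows = [
        # (table entry, machine, category, CTP in Mtops)
        (2,  "Cray-1",           "CFD",     195),
        (4,  "Cray-1S",          "Nuclear", 195),
        (9,  "Cosmic Cube (6)",  "SIP",     293),
        (12, "Cray XMP/1",       "CCM",     353),
        (15, "Cray YMP/1",       "CSM",     500),
        (27, "Cray YMP/1",       "CEA",     500),
        (53, "Cray C90/1",       "CWO",    1437),
        (94, "ORNL Paragon/360", "FMS",    9626),
    ]

    lowest_ctp = defaultdict(lambda: float("inf"))
    for _, _, category, ctp in sample_rows:
        lowest_ctp[category] = min(lowest_ctp[category], ctp)

    for category, ctp in sorted(lowest_ctp.items(), key=lambda kv: kv[1]):
        # "Lowest observed" means a problem of this kind was solved at this CTP;
        # it does not mean smaller configurations cannot solve some version of it.
        print(f"{category}: lowest observed CTP in sample = {ctp} Mtops")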

The following acronyms are used for applications categories:

CCM     Computational Chemistry and Materials Science
CWO     Climate/Weather/Ocean Modeling and Simulation
CEA     Computational Electromagnetics and Acoustics
FMS     Forces Modeling and Simulation/C4I
CFD     Computational Fluid Dynamics
Nuclear Nuclear weapons development and stockpile maintenance
CSM     Computational Structural Mechanics
SIP     Signal/Image Processing



No. Machine Year CTP Category Time Problem Problem size
1 VAX 6210 1 SIP 35 min Focus an image [1] 5040 x 1260 samples (18 km
x 8 km)
2 Cray-1 1984 195 CFD 1.4 CPU
hours
SCRAMJET wing-fuselage aerodynamic
interaction simulation [2]
56,730 grid points, Mach 6
3 Cray-1S early
1980s
195 CFD 20 CPU hr Simulation of after-body drag for a fuselage with
propulsive jet. Reynolds averaged Navier-Stokes
[3]
4 Cray-1S early 1980s 195 nuclear 1127.5 CPU sec LANL Hydrodynamics code 3 [4]
5 Cray-1S early 1980s 195 nuclear 117.2 CPU sec LANL Hydrodynamics code 1 [4]
6 Cray-1S early 1980s 195 nuclear 4547.1
CPU sec
LANL Hydrodynamics code 2 [4]
7 Cray-1 195 24 hours Crash simulation [5] 5,500 elements
8 Cray-1S 195 CFD Nonlinear inviscid (STAGE II): Above, plus
Transonic pressure loads; wave drag [3,6]
10,000 grid points
9 Cosmic Cube (6) 1991 293 SIP 1.81
millisec
Discrete Fourier transform algorithm [7] 5040 complex data samples
10 Cray X-MP/1 mid 1980s 316 nuclear two-dimensional, reduced physics simulation
11 Cray
XMP/48
(1 Proc)
1988 353 CFD 20-50 hr Flow simulation around complete F-16A
aircraft (wings, fuselage, inlet, vertical and
horizontal tails, nozzle) at 6 deg angle of
attack, Mach 0.9. Reynolds Average Navier
Stokes. Reynolds number = 4.5 million.
[8]
1 million grid points, 8
Mwords (one 2 Mword
zone in memory at a time),
2000-5000 iterations
12 Cray XMP/1 1990 353 CCM 1000 hours Molecular dynamics of bead-spring
model of a polymer chain [9]
Chainlength = 400
13 Cray J916/1 1996 450 CFD 300 CPU hr Modeling of transonic flow around AS28G
wing/body/pylon/nacelle configuration. 3D
Reynolds Averaged full Navier-Stokes
solution. [10]
3.5 million nodes, 195
Mwords memory
14 Origin2000/1 1997 459 SIP 5.650 s RT_STAP benchmark (hard) [11] 2.7 million samples per .161
sec
15 Cray YMP/1 1987 500 CSM 200 hours 3D shock physics simulation [12] 200,000 cells
16 Cray YMP/1 1990 500 CSM 39 CPU sec Static analysis of aerodynamic loading on
solid rocket booster [13]
10,453 elements,
9206 nodes,
54,870 DOF,
256 Mword memory
17 Cray YMP 1991 500 CCM 1000 hours Molecular dynamics modeling of
hydrodynamic interactions in "semi-
dilute" and concentrated polymer
solutions [9]
single chain

60 monomers

18 Cray YMP 1991 500 CCM 1000 hours Modeling of thermodynamics of
polymer mixtures [9]
lattice size = 112³

chain size = 256

19 Cray YMP/1 1991 500 CFD 40 CPU h Simulation of viscous flow about the Harrier
Jet (operating in-ground effect modeled)
[14]
2.8 million points, 20
Mwords memory.
20 Cray YMP/1 1993 500 CCM 1.47 CPUs/
timestep
MD simulation using short-range forces model
applied to 3D configuration of liquid near solid
state point [15]
100,000 atoms


21 Cray YMP/1 1996 500 CFD 170 CPU hr
(est)
Modeling of transonic flow around AS28G
wing/body/pylon/nacelle configuration. 3D
Reynolds Averaged full Navier-Stokes
solution. [10]
3.5 million nodes, 195
Mwords memory
22 Cray YMP/1 1996 500 CFD 6 CPU hr Modeling of transonic flow around F5 wing
(Aerospatiale). 3D Reynolds Averaged full
Navier-Stokes solution. [10]
442368 cells (192x48x48).
23 Cray YMP 1996 500 CFD 8 CPU hr,
3000 timesteps
Large-Eddy simulation at high Reynolds
number [16]
2.1 million grid points, 44
Mwords
24 workstation 1997 500 CSM 2D modeling of simple projectile striking
simple target [17]
510,000s of grid points
25 Cray Y-MP/1 late 1980s 500 nuclear 1000's of hours two-dimensional, almost full physics (e.g.
Monte Carlo neutron transport
26 Cray Y-MP/1 late 1980s 500 nuclear 1000's of hours 1D, full physics simulation
27 Cray YMP/1 500 CEA 5 CPU hr
per timestep
Signature of modern fighter at
fixed incident angle at 1 GHz [18]
50 million grid points @ 18
words/grid point
28 Cray Y-MP/1 500 nuclear two-dimensional, reduced physics
simulations
100-500 MBytes
29 Mercury Race (5
x 4 i860
processors,
Ruggedized)
1997 866 SIP SAR system aboard P-3C Orion maritime
patrol aircraft [19]
30 Cray YMP/2 256
MW
1990 958 CSM 19.79 CPUs Static analysis of aerodynamic loading on
solid rocket booster [13]
10,453 elements, 9206 nodes,
54,870 DOF
31 Cray Y-MP/2 late
1980s
958 CFD Design of F-22 fighter [20]
32 CM-5/32 1993 970 CCM 449 CPU
sec
Determination of structure of Eglin-C
molecular system [21]
530 atoms, with 1689 distance and
87 dihedral constraints
33 Cray-2/1 1987 1300 CSM 400 hours 3D modeling of projectile striking target.
Hundreds of microsecond timescales. [17]
.5-1.5 million grid points
256 Mword memory
34 Cray-2 1992 1098 CSM 5 CPU hours Modeling aeroelastic response of a
detailed wing-body configuration using
a potential flow theory [13]
35 Cray-2 1992 1098 CSM 6 CPU days Establish transonic flutter boundary for a
given set of aeroelastic parameters [13]
36 Cray-2 1992 1098 CSM 600 CPU days Full Navier-Stokes equations [13]
37 Cray-2 1098 CSM 2 hours 3D modeling of symmetric, transonic, low
angle of attack impact of warhead and
defensive structure [20]
38 Cray-2 1098 CSM 200 hours Penetration model against advanced armor
[20]
39 Cray-2 1098 CSM 200 hours Modeling full kinetic kill effects against
hybrid armors [20]
40 Cray-2 1098 CSM 40 hours 3D modeling of symmetric, transonic, low
angle of attack impact of warhead and
defensive structure [20]


41 Cray-2 1984 1300 CFD 15 CPU m Simulation of 2D viscous flow field about
an airfoil [3]
42 Cray-2 1988 1300 CFD 20 hr Simulation of flow about the space shuttle
(Orbiter, External Tank, Solid Rocket
Boosters), Mach 1.05, Reynolds Averaged
Navier Stokes, Reynolds number = 4
million (3% model) [8]
750,000 grid points, 6
Mwords.
43 Cray-2 1980s 1300 CFD 100 CPU h Simulation of external flow about an aircraft
at cruise. Steady flow. Steady Navier-
Stokes simulation. [14]
1.0 million grid points.
44 1995 1400 nuclear Credible one- and two-dimensional
simulations [22]
45 Cray C90/1 1993 1437 CCM .592 sec/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
100,000 atoms
46 Cray C90/1 1994 1437 CEA 161 CPU
hours
Compute magnitude of scattered
wave-pattern on X24C re-entry
aerospace vehicle [23]
181 x 59 x 162 grid (1.7 million)
47 Cray C90/1 1994 1437 CFD overnight Modeling of flow over a submarine hull
with no propulsion unit included [24]
1-2 million grid points
48 Cray C916 1994 1437 CEA Radar cross section on perfectly
conducting sphere [25]
48 x 48 x 96 (221 thousand)
49 Cray C90/1 1995 1437 CEA 1 hour Submarine acoustic signature for
single frequency
50 Cray C90/1 1995 1437 CEA 1 hour Submarine acoustic signature for
single frequency [26]
51 Cray C90/1 1994 1437 CEA 12,745 sec Radar cross section of perfectly
conducting sphere, wave
number 20 [27]
97 x 96 x 192 (1.7 million grid
points), 16.1 points per
wavelength
52 Cray C90/1 1995 1437 CEA 200 hour Submarine acoustic signature for
full spectrum of frequencies [26]
53 Cray C90/1 1995 1437 CWO CCM2, Community Climate Model, T42 [28] 128 x 64 transform grid, 4.2
Gflops
54 Cray C90/1 1996 1437 CFD 19 CPU hr Simulation of turbulent flow around the
F/A-18 aircraft at 60 degree angle of attack.
Mach 0.3. Reynolds number = 8.88 million
[29]
1.25 million grid points.
100 Mwords of memory.
55 Cray C90/1 1996 1437 CFD 200 CPU hr Simulation of unsteady flow about an F-18
High Alpha Research Vehicle at 30, 45, 60
deg angle of attack [30]
2.5 million grid points for
half-body modeling. 40
MWords memory
56 Cray C90/1 1996 1437 CFD 3 CPU hr Modeling of flow over a blended wing/body
aircraft at cruise. [31]
45 Mwords of memory
58 SGI
PowerChallenge
4 nodes
1997 1686 CCM overnight Explosion simulation [32] 30 thousand
diatomic molecules
59 SGI Onyx 1990 1700 SIP Attack and Launch Early Reporting to
Theater (ALERT) [20]
60 Mercury Race (52
processor, i860)
1996 1773 SIP Sonar system for Los Angeles Class
submarines [33]


61 Cray YMP/4 256
MW
1990 1875 CSM 10 CPU s Static analysis of aerodynamic loading on
solid rocket booster [13]
10,453 elements, 9206 nodes,
54,870 DOF
62 Intel iPSC860/64 1993 2097 CCM .418
CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
100,000 atoms
63 Intel iPSC860/64 1993 2097 CCM 3.68 sec/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
1 million atoms
64 Cray 2, 4 proc 1990 2100 CSM 400 hours armor/anti-armor, 3-D
65 Cray
T3D/16
1996 2142 CFD 20,000 CPU
sec. 50 CPU
sec x 400
steps
Aerodynamics of missile at Mach 3.5.
Reynolds number = 6.4 million [34]
500x150 (75,000)
elements in mesh. 381,600
equations solved every
timestep.
66 CM-2 2471 SIP 10 minutes Creation of synthetic aperture radar image [20]
67 Cray C90/2 1993 2750 117 sec Determination of structure of Eglin-C
molecular system [21]
530 atoms, with 1689 distance and
87 dihedral constraints
68 Cray C90/2 1993 2750 CCM 12269 sec Determination of structure of E. coli
trp repressor molecular system [21]
1504 atoms

6014 constraints

69 iPSC860/
128
1997 3485 CFD 120 hr Small unmanned vehicle, fully turbulent
70 iPSC860/
128
1997 3485 CFD 5 days Full ship flow model [35]
71 iPSC860/
128
1997 3485 CFD 5 days Small unmanned undersea vehicle. Fully
turbulent model with Reynolds numbers.
[39]
2.5 million grid points
72 Cray YMP/8 1989 3708 CFD Model of flow around a fully appended
submarine. Steady state, non-viscous flow
model. [35]
250,000 grid points
73 Cray Y-MP/8 early
1990s
3708 CSM 200 hours 3D shock physics simulation [12] 6 million cells
74 Cray Y-MP/8 mid
1990s
3708 CSM 10-40 hours 3D shock physics simulation [12] 100K-1 million cells
75 IBM SP1/64 1993 4074 CCM 1.11 sec/
timestep
Molecular dynamics of SiO2 system [36] 0.53 million atoms
76 Intel Paragon 4600 CEA Non-acoustic anti-submarine warfare
sensor development [20]
77 IBM SP-2/
32
1996 4745 CFD 80 minutes 3D unsteady incompressible time-averaged
Navier-Stokes. Multiblock transformed
coordinates. [37]
3.3 million points
78 IBM SP-2/32 1997 4745 CFD Helicopter rotor motion coupled with rotor
CFD, predict 3D tip-relief flow effect,
parallel approximate factorization method.
[38]
79 Origin2000/
12
1997 4835 CFD 4329 sec CFL3D applied to a wing-body
configuration: time-dependent thin-layer
Navier-Stokes equation in 3D, finite-
volume, 3 multigrid levels. [39]
3.5 million points
80 Intel Paragon/150 early
1990s
4864 CFD JAST aircraft design [20]
81 Cray C90/4 1993 5375 CFD 1 week Modeling of flow around a smooth ellipsoid
submarine. Turbulent flow, fixed angle of
attack, [35]
2.5 million grid points
82 CM-5/128 1993 5657 CCM 436 CPU
sec
Determination of structure of Eglin-C
molecular system [21]
530 atoms, with 1689 distance and
87 dihedral constraints
83 CM-5/128 1993 5657 CCM 6799 CPU
sec
Determination of structure of E. coli
trp repressor molecular system [21]
1504 atoms, with 6014 constraints
84 Origin2000/16 1997 5908 SIP .39 s RT_STAP benchmark (hard) [11] 2.7 million samples per .161 sec
85 IBM SP-2/45 1997 6300 CFD 2 to 4 hours Helicopter blade structural optimization
code, gradient-based optimization technique
to measure performance changes as each
design variable is varied. [38]
up to 90 design variables,
one run per variable


86 Cray T3D/64 1995 6332 CWO CCM2, Community Climate Model, T42 [28] 128 x 64 transform grid, 608
Mflops
87 IBM SP-2/64 1997 7100 CFD 2 to 4 hours Helicopter rotor free-wake model, high-
order vortex element and wake relaxation.
[38]
88 Paragon 256 1995 7315 CCM 82 sec/
timestep
Particle simulation interacting through
the standard pair-wise 6-12 Lennard-
Jones potential [40]
50 million particles
89 CEA Bottom contour modeling of shallow water
in submarine design [20]
90 SIP Topographical Synthetic Aperture Radar data
processing [20]
91 Paragon /321 1995 8263 SIP 2D FFT [41] 200 x 1024 x 1024 (200 Mpixels)
images/sec
92 Intel Paragon/321 8980 SIP Development of algorithms for Shipboard
infrared search & tracking (SIRST) [20]
93 SGI
PowerChallenge
(R8000/150)
/16
1996 9510 CFD 3.6 hr, 3000
timesteps
Large-Eddy simulation at high Reynolds
number [16]
2.1 million grid points, 44
Mwords
94 ORNL
Paragon/360
9626 FMS Synthetic forces experiments [42] 5713 vehicles, 6,697 entities
95 10000 SIP Long-range unmanned aerial vehicles
(UAV) on-board data processing [20]
96 Cray T3D/128 1995 10056 CEA 12,745 sec Radar cross section of perfectly
conducting sphere, wave
number 20 [27]
97 x 96 x 192 (1.7 million grid
points), 16.1 points per
wavelength
97 Cray T3D/128 1995 10056 CEA 2,874 s Radar cross section of perfectly
conducting sphere [27]
128 x 96 x 92 (2.4 million cells)
600 timesteps
98 CM-5/256 1993 10457 CCM 492 CPU
sec
Determination of structure of Eglin-C
molecular system [21]
530 atoms, with 1689 distance and
87 dihedral constraints
99 CM-5/256 1993 10457 CCM 7098 CPU
sec
Determination of structure of E. coli
trp repressor molecular system [21]
1504 atoms, with 6014 constraints
100 Cray C98 1994 10625 CWO ~5 hrs Global atmospheric forecast, Fleet
Numerical operational run [43]
480 x 240 grid;
18 vertical layers
101 Origin2000/32 1997 11768 SIP .205 s RT_STAP benchmark (hard) [11] 2.7 million samples per .161 sec
102 Origin2000/32 1997 11768 CWO SC-MICOM, global ocean forecast, two-tier
communication pattern [17]
103 Intel Paragon/512 1995 12680 CCM 84 CPUs/
timestep
MD simulation of 102.4 million particles using pair-wise 6-12 Lennard Jones
potential [40]
102.4 million atoms
104 IBM SP2/128 1995 13147 CEA 3,304.2 s Radar cross section of perfectly
conducting sphere [27]
128 x 96 x 92 (2.4 million cells)
600 timesteps
105 Maui SP2/128 13147 FMS 2 hours Synthetic forces experiments [42] 5086 vehicles
106 Intel Touchstone
Delta/512
1993 13236 CCM 4.84 sec/
timestep
Molecular dynamics of SiO2 system [36] 4.2 million atoms
107 Intel Paragon 1995 13236 SIP 55 sec Correlation processing of 20 seconds' worth
of SIR-C/X-SAR data from the Space Shuttle [45]
108 NASA Ames SP2/
139
14057 FMS 2 hours Synthetic forces experiments [42] 5464 vehicles
109 IBM SP-2/128 1995 14200 CWO PCCM2, Parallel CCM2, T42 [46] 128 x 64 transform grid, 2.2
Gflops
110 Origin2000/40 1997 14698 CSM Crash code, PAM
111 IBM SP-2/160 1995 15796 CWO AGCM, Atmospheric General Circulation
Model [47]
144 x 88 grid points, 9 vertical
levels, 2.2 Gflops
112 Cray C912 1996 15875 CSM 23 CPU hours Water over C4 explosive in container
above wet sand,
alternate scenario: container next to
building, finite element [48]
38,000 elements,
230 msec simulated time,
16 Mwords memory



113 Cray C912 1996 15875 CSM 36 hours Water over C4 over sand
114 Cray C912 1996 15875 CSM 435 CPU hours Water over C4 explosive in container
above wet sand,
building at a distance[48]
13 million cells,
12.5 simulated time
115 Cray C912 1997 15875 CWO ~5 hrs Global atmospheric forecast, Fleet
Numerical operational run [43]
480 x 240 grid; 24 vertical layers
116 Cray C912 1997 15875 CWO 1 hr Global ocean forecast, Fleet
Numerical operational run [49]
1/4 degree, 25 km resolution
117 ORNL
Paragon/680
16737 FMS Synthetic forces experiments [42] 10913 vehicles, 13,222 entities

118 Cray T3D/256 1993 17503 CCM .0509
CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
100,000 atoms
119 Cray T3D/256 1993 17503 CCM .405
CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
1 million atoms
120 Cray T3D/256 1993 17503 CCM 1.86
CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
5 million atoms
121 Cray T3D/256 1995 17503 CWO Global weather forecasting model, National
Meteorological Center, T170 [50]
32 vertical levels, 190 x 380 grid
points, 6.1 Gflops
122 Cray T3D/256 1995 17503 CWO AGCM, Atmospheric General Circulation
Model [47]
144 x 88 grid points, 9 vertical
levels, 2.5 Gflops
123 Cray T3D/256 1996 17503 CWO 105 min ARPS, Advanced Regional Prediction
System, v 4.0, fine scale forecast [51]
96 x 96 cells, 288 x 288 km, 7 hr
forecast
124 Cray
T3D/256
1996 17503 CFD 2 hr, 3000
timesteps
Large-Eddy simulation at high Reynolds
number [16]
2.1 million grid points, 44
Mwords
125 Cray
T3D/256
1996 17503 CFD 52,000 CPU
sec. 130 CPU
sec x 400
timesteps
Aerodynamics of missile at Mach 2.5, 14-
degree angle of attack for laminar and
turbulent viscous effects. [34]
944,366 nodes and
918,000 elements.
4,610,378 coupled
nonlinear equations solved
every timestep.
126 CM-5/512 1993 20057 CCM 8106 sec Determination of structure of E. coli
trp repressor molecular system [21]
1504 atoms, with 6014 constraints
127 CM-5/512 1995 20057 CFD 15,000 CPU
sec, 30 CPU
sec per each
of 500
timesteps
Flare maneuver of a large ram-air
parachute. Reynolds number = 10 million.
Algebraic turbulence model [52,53]
469,493 nodes. 455,520
hexahedral elements.
3,666,432 equations solved
per timestep.
128 CM-5/512 1995 20057 CFD 500
Timesteps
Parafoil with flaps flow simulation. [52] 2,363,887 equations solved
at each timestep
129 CM-5/512 1995 20057 CFD Parafoil with flaps flow simulation. [52] 2,363,887 equations solved
at each of 500 timesteps
130 CM-5/512 1995 20057 CFD Fighter aircraft at Mach 2.0. [52] 3D mesh of 367,867
nodes, 2,143,160
tetrahedral elements, and
1.7 million coupled
nonlinear equations solved
per timestep.
131 CM-5/512 1996 20057 CFD 500 timesteps Steady-state parafoil simulation, 10 deg
angle of attack, Reynolds number = 10
million [54]
2.3 million equations every
timestep
132 CM-5/512 1996 20057 CFD 500 timesteps Inflation simulation of large ram-air
parachute, Box initially at 10 deg angle of
attack and velocity at 112 ft/sec, 2
simulated seconds. [55]
1,304,606 coupled
nonlinear equations solved
every timestep.
133 CM-5/512 1996 20057 CFD 7500 CPU
sec = 50 CPU
sec x 150
timesteps
Aerodynamics of missile at Mach 2.5, 14
deg angle of attack for laminar and
turbulent viscous effects. [34]
763,323 nodes and
729,600 elements.
3,610,964 coupled
nonlinear equations solved
in each of 150 pseudo-time
steps.


134 CM-5/512 1996 20057 CFD Steady-state parafoil simulation, 10 deg
angle of attack [54]
2.3 million equations x 500 timesteps;
Reynolds number = 10 million
135 CM-5/512 1996 20057 CFD Inflation simulation of large ram-air
parachute, Box initially at 10 deg angle of
attack and velocity at 112 ft/sec, 2
simulated seconds. [55]
1,304,606 coupled
nonlinear equations solved in
each of 500 timesteps.
136 CM-5/512 1996 20057 CFD Flare simulation of large ram-air parachute.
[55]
3,666,432 coupled
nonlinear equations solved
every timestep.
137 CM-5/512 1996 20057 CFD 3D simulation of round parachute, Reynolds
number = 1 million [56]
138 CM-5/512 1996 20057 CFD 3D study of missile aerodynamics, Mach
3.5, 14 deg angle of attack, Reynolds
number = 14.8 million [57]
340,000 element mesh,
nonlinear system of
1,750,000+ equations
solved every timestep.
139 CM-5/512 1997 20057 CFD 30 hours Parafoil simulation. [54] 1 million equations solved
500 times per run
140 CM-5/512 20057 CCM 8106 sec
141 C916 1995 21125 CSM 900 CPU hours Hardened structure with internal
explosion, portion of the overall
structure and surrounding soil.
DYNA3D, nonlinear, explicit FE code.
Nonlinear constitutive models to
simulate concrete & steel. [58]
144,257 solid & 168,438
truss elements for
concrete & steel bars,
17,858 loaded surfaces,
500,000 DOF,
60 msec simulated time
142 Cray C916 1995 21125 CWO CCM2, Community Climate Model, T170
[28]
512 x 256 transform grid, 2.4
Gbytes memory, 5.3 Gflops
143 Cray C916 1995 21125 CWO IFS, Integrated Forecasting System, T213
[59]
640 grid points/latitude, 134,028
points/horizontal layer, 31
vertical layers
144 Cray C916 1995 21125 CWO ARPS, Advanced Regional Prediction
System, v 3.1 [60]
64 x 64 x 32, 6 Gflops
145 Cray C916 1996 21125 CSM 325 CPU hours
3 day continuous
run
Explosion engulfing a set of buildings,
DYNA3D analysis to study effects on
window glass & doors done off-line
after the blast simulation completed [61]
825 Mwords memory
146 Cray C916 1996 21125 CWO 45 min ARPS, Advanced Regional Prediction
System, v 4.0, coarse scale forecast [51]
96 x 96 cells, 864 x 864 km, 7 hr
forecast
147 Cray C916 1996 21125 CSM 72 hours Explosion engulfing bldgs
148 Cray C916 1996 21125 CFD 3D simulation of flow past a tuna w/
oscillating caudal fin. Adaptive remeshing.
Integrated with rigid body motion [62]
149 Cray C90/16 1997 21125 CFD 9 months 3D simulation of submarine with unsteady
separating flow, fixed angle of attack, fixed
geometry [35]
150 Cray C916 1998 21125 CWO ~5hrs Global atmospheric forecast, Fleet
Numerical operational run [43]
480 x 240 grid; 30 vertical layers
151 Cray C916 21125 CSM 200 hours 2D model of effects of nuclear blast on
structure [20]
152 Cray C916 21125 CSM 600 hours 3D model of effects of nuclear blast on
structure [20]
153 Cray C916 21125 CSM several
hundred
hrs
Modeling effects of complex defensive
structure [20]
154 Cray C916 21125 CFD Modeling of turbulent flow about a
submarine [20]
155 CEWES SP-
2/229
21506 FMS 2 hours Synthetic forces experiments [42] 9739 vehicles
156 Origin2000/
64
1997 23488 CFD ARC3D: simple 3D transient Euler variant
on a rectilinear grid. [63]
157 Paragon 1024 1995 24520 CWO PCCM2, Parallel CCM2, T42 [46] 128 x 64 transform grid, 2.2 Gflops


158 Intel
Paragon/1024
24520 CCM .914 sec/
timestep
5 million atoms
159 Intel
Paragon/1024
24520 CCM .961 sec/
timestep
10 million atoms
160 ORNL
Paragon/1024
24520 FMS 2 hours Synthetic forces experiments [42] 16995 vehicles
161 Intel
Paragon/1024
24520 CCM 8.54 sec/
timestep
50 million atoms
162 Paragon 1024 24520 CCM 82 sec/
timestep
200 million particles
163 Paragon 1024 24520 CCM 82 sec/
timestep
400 million particles
164 ORNL
Paragon/1024
24520 FMS 2 hours Synthetic forces experiments [42] 16606 vehicles, 20,290 entities
165 Intel Paragon/1024 1993 24520 CCM .0282 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
100,000 atoms
166 Intel Paragon/1024 1993 24520 CCM .199 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
1 million atoms
167 Intel Paragon/1024 1993 24520 CCM .914 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
5 million atoms
168 Intel Paragon/1024 1993 24520 CCM .961 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
10 million atoms
169 Intel Paragon/1024 1993 24520 CCM 8.54 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
50 million atoms
170 Intel Paragon/1024 1995 24520 CCM 160 CPUs/
timestep
MD simulation of 400 million particles using
pair-wise 6-12 Lennard Jones potential
[40]
400 million atoms
171 Cray T3D/400 1995 25881 CWO IFS, Integrated Forecasting System, T213
[59]
640 grid points/latitude, 134,028
points/horizontal layer, 31
vertical layers
172 Mercury Race
(140 PowerPC
603e processors)
1997 27113 SIP Large Mercury System shipped in 1997 [64]
173 Cray T3D/512 1993 32398 CCM .0293 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
100,000 atoms
174 Cray T3D/512 1993 32398 CCM .205 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
1 million atoms
175 Cray T3D/512 1993 32398 CCM .994 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
5 million atoms
176 Cray T3D/512 1993 32398 CCM 1.85 CPUs/
timestep
MD simulation using short-range
forces model applied to 3D
configuration of liquid near solid state
point [15]
10 million atoms
177 Cray
T3D/512
1995 32398 CFD 500 timesteps Steady-state parafoil simulation. 2 deg
angle of attack. Reynolds number = 10
million. [52]
38 million coupled
nonlinear equations at
every pseudo-timestep.
178 Cray
T3D/512
1995 32398 CFD Modeling paratroopers dropping from
aircraft. Moving grid. Cargo aircraft
travelling at 130 Knots. High Reynolds
number. Smagorinsky turbulence model
[52]
880,000 tetrahedral
elements for half of the
domain.
179 Cray
T3D/512
1995 32398 CFD Fighter aircraft at Mach 2.0. [52] 3D mesh of 185,483 nodes
and 1,071,580 tetrahedral
elements


180 Intel Paragon mid
1990s
44000 CSM <24 hours 3D shock physics simulation [12] 6 million cells
181 Intel Paragon mid
1990s
44000 CSM several
restarts
3D shock physics simulation [12] 20 million cells
182 Intel Paragon 1994 46000 nuclear overnight 3D reduced physics simulation of
transient dynamics of nuclear weapon [12,65]
183 ASCI Red 1997 46000 CSM few
hundred
hours
3D modeling of explosive material impact
on copper plate [12]
184 Origin2000/128 1997 46,928 CEA Radar cross section of VFY218
aircraft under 2 GHz radar wave [66]
1.9 million unknowns
185 Origin2000/192 1998 70368 CFD 400 hours armor/anti-armor, 3-D
186 Origin2000/192 1998 70368 CFD months 3D simulation of submarine with unsteady
flow, fixed angle of attack, and moving
body appendages and complex propulsors. [35]
187 ASCI Red/1024 1997 76000 CSM <25 hours 3D shock physics simulation [12] 100 million cells
188 ASCI Red/1024 1997 76000 CSM <50 hours 3D shock physics simulation [12] 250 million cells
189 ASCI Red/1024 1997 76000 CSM few hours 3D shock physics simulation [12] 2-4 million cells
190 80000 SIP Tier 2 UAV on-board data processing [20]
191 Cray T3E-
900/256
1997 91035 CFD 500 timesteps Steady-state parafoil simulation, 10 deg
angle of attack, Reynolds number = 10
million. [54]
2.3 million equations every
timestep
192 Cray T3E-
900/256
1997 91035 CWO 590 hrs Global ocean model "hindcast" [49] 1/16 degree, 7 km resolution
193 Cray T3E-900
256 nodes
1998 91035 CSM 450,000 node
hours
Grizzly breaching vehicle plow
simulation, parametric studies, different
soil conditions & blade speeds [67]
2 to 4 million particles
(soil, rock, mines, obstacles, etc.)
194 ASCI++ ?? 50,000,000+ nuclear days First principles 3D modeling

References

[1] Aloisio, G. and M. A. Bochicchio, "The Use of PVM with Workstation Clusters for Distributed SAR Data Processing," in High Performance Computing and Networking, Milan, May 3-5, 1995. Proceedings, Hertzberger, B. and G. Serazzi, Eds.: Springer, 1995, pp. 570-581.

[2] Liu, S. K., Numerical Simulation of Hypersonic Aerodynamics and the Computational Needs for the Design of an Aerospace Plane, RAND Corporation, Santa Monica, CA, 1992.

[3] Ballhaus Jr., W. F., "Supercomputing in Aerodynamics," in Frontiers of Supercomputing, Metropolis, N. et al, Eds. Berkeley, CA: University of California Press, 1986, pp. 195-216.

[4] Ewald, R. H., "An Overview of Computing at Los Alamos," in Supercomputers in Theoretical and Experimental Science, Devreese, J. T. and P. van Camp, Eds. New York and London: Plenum Press, 1985, pp. 199-216.



[5] Smith, N. P., "PAM Crash: HPC's First Killer App Adapts to New Needs," HPCWire, Mar 28, 1997 (Item 10955).

[6] Graves, R., "Supercomputers: A Policy Opportunity," in Supercomputers: a key to U.S. scientific, technological, and industrial preeminence, Kirkland, J. R. and J. H. Poore, Eds. New York: Praeger Publishers, 1987, pp. 129-140.

[7] Albrizio, R. et al, "Parallel/Pipeline Multiprocessor Architectures for SAR Data Processing," European Trans. on Telecommunication and Related Technologies, Vol. 2, No. 6, Nov.-Dec., 1991, pp. 635-642.

[8] Holst, T. L., "Supercomputer Applications in Computational Fluid Dynamics," in Supercomputing 88, Volume II: Science and Applications, Martin, J. L. and S. F. Lundstrom, Eds. Los Alamitos: IEEE Computer Society Press, 1988, pp. 51-60.

[9] Binder, K., "Large-Scale Simulations in Condensed Matter Physics - The Need for a Teraflop Computer," International Journal of Modern Physics C, Vol. 3, No. 3, 1992, pp. 565-581.

[10] Chaput, E., C. Gacherieu, and L. Tourrette, "Experience with Parallel Computing for the Design of Transport Aircrafts at Aerospatiale," in High-Performance Computing and Networking. International Conference and Exhibition HPCN Europe 1996. Proceedings, Liddel, H. et al, Eds. Berlin: Springer-Verlag, 1996, pp. 128-135.

[11] Games, R. A., Benchmarking for Real-Time Embedded Scalable High Performance Computing, DARPA/Rome Program Review, May 6, 1997.

[12] Camp, W. J. et al, Interviews, Sandia National Laboratories, Dec. 11, 1997.

[13] Farhat, C., "Finite Element Analysis on Concurrent Machines," in Parallel Processing in Computational Mechanics, Adeli, H., Ed. New York: Marcel Dekker, Inc., 1992, ch. 7, pp. 183-217.

[14] Simon, H. D., W. R. Van Dalsem, and L. Dagum, "Parallel CFD: Current Status and Future Requirements," in Parallel Computational Fluid Dynamics: Implementation and Results, Simon, H. D., Ed. Cambridge: Massachusetts Institute of Technology, 1992, pp. 1-29.

[15] Plimpton, S., "Fast Parallel Algorithms for Short-Range Molecular Dynamics," Journal of Computational Physics, Vol. 117, 1995, pp. 1-19.



[16] Strietzel, M., "Parallel Turbulence Simulation Based on MPI," in High-Performance Computing and Networking. International Conference and Exhibition HPCN Europe 1996. Proceedings, Liddel, H. et al, Eds. Berlin: Springer-Verlag, 1996, pp. 283-289.

[17] Kimsey, K. and S. Schraml, Interview, Army Research Lab, Aberdeen Proving Ground, June 12, 1997.

[18] Shang, J. S. and K. C. Hill, "Performance of a Characteristic-Based, 3-D, Time-Domain Maxwell Equations Solver on the Intel Touchstone Delta," Applied Computational Electromagnetics Society Journal, Vol. 10, No. 1, May, 1995, pp. 52-62.

[19] U.S. Navy To Use Mercury RACE Systems for Advanced Radar, HPCWire, Jun 27, 1997 (Item 11471).

[20] Goodman, S., P. Wolcott, and G. Burkhart, Executive Briefing: An Examination of High-Performance Computing Export Control Policy in the 1990s, IEEE Computer Society Press, Los Alamitos, 1996.

[21] Pachter, R. et al, "The Design of Biomolecules Using Neural Networks and the Double Iterated Kalman Filter," in Toward Teraflop Computing and New Grand Challenges. Proceedings of the Mardi Gras '94 Conference, Feb. 10-12, 1994, Kalia, R. K. and P. Vashishta, Eds. Commack, NY: Nova Science Publishers, Inc., 1995, pp. 123-128.

[22] The Role of Supercomputers in Modern Nuclear Weapons Design, Department of Energy (Code NN-43), Apr. 27, 1995.

[23] Shang, J. S., "Characteristic-Based Algorithms for Solving the Maxwell Equations in the Time Domain," IEEE Antennas and Propagation Magazine, Vol. 37, No. 3, June, 1995, pp. 15-25.

[24] Morgan, W. B. and J. Gorski, Interview, Naval Surface Warfare Center, Jan. 5, 1998.

[25] Shang, J. S., D. A. Calahan, and B. Vikstrom, "Performance of a Finite Volume CEM Code on Multicomputers," Computing Systems in Engineering, Vol. 6, No. 3, 1995, pp. 241-250.

[26] Hurwitz, M., Interview, Naval Surface Warfare Center, Aug. 6, 1997.

[27] Shang, J. S., "Time-Domain Electromagnetic Scattering Simulations on Multicomputers," Journal of Computational Physics, Vol. 128, 1996, pp. 381-390.

[28] Hack, J. J., "Computational design of the NCAR community climate model," Parallel Computing, Vol. 21, No. 10, 1995, pp. 1545-1569.



[29] Ghaffari, F. and J. M. Luckring, Applied Computational Fluid Dynamics.
http://wk122.nas.nasa.gov/NAS/TechSums/9596/Show?7126.

[30] Chaderjian N. M. and S. M. Murman, Unsteady Flow About the F-18 Aircraft,
http://wk122.nas.nasa.gov/NAS/TechSums/9596/Show?7031.

[31] Potsdam, M. A., Blended Wing/Body Aircraft,
http://wk122.nas.nasa.gov/NAS/TechSums/9596/Show?7650.

[32] Rice, B. M., "Molecular Simulation of Detonation," in Modern Methods for Multidimensional Dynamics Computations in Chemistry, Thompson, D. L., Ed.: World Scientific Publishing Co., 1998. (To appear)

[33] Mercury RACE Selected for Navy Sonar System, HPCWire, Jul 26, 1996 (Item 8916).

[34] Sturek, W. et al, Parallel Finite Element Computation of Missile Flow Fields, Preprint 96012, University of Minnesota Army HPC Research Center, Minneapolis, MN, 1996.

[35] Boris, J. et al, Interview, Naval Research Lab, Washington, DC, June 9, 1997.

[36] Nakano, A., R. K. Kalia, and P. Vashishta, "Million-Particle Simulations of Fracture in Silica Glass: Multiresolution Molecular Dynamics Approach on Parallel Architectures," in Toward Teraflop Computing and New Grand Challenges. Proceedings of the Mardi Gras '94 Conference, Feb. 10-12, 1994, Kalia, R. K. and P. Vashishta, Eds. Commack, NY: Nova Science Publishers, Inc., 1995, pp. 111-122.

[37] Pankajakshan, R. and W. R. Briley, "Efficient parallel flow solver," in Contributions to DoD Mission Success from High-Performance Computing 1996, 1996, p. 73.

[38] Conlon, J., "Propelling power of prediction: Boeing/NASA CRA leverages the IBM SP2 in rotor analysis," Insights, No. 3, Sep, 1997, pp. 2-9.

[39] Faulkner, T., "Origin2000 update: Studies show CFL3D can obtain 'reasonable' performance," NASNews, Vol. 2, No. 17, November-December, 1997 (http://science.nas.nasa.gov/Pubs/NASnews/97/11/).

[40] Deng, Y. et al, "Molecular Dynamics for 400 Million Particles with Short-Range Interactions," in High Performance Computing Symposium 1995 "Grand Challenges in Computer Simulation" Proceedings of the 1995 Simulation Multiconference, Apr. 9-13, 1995, Phoenix AZ, Tentner, A., Ed.: The Society for Computer Simulation, 1995, pp. 95-100.



[41] Rowell, J., "Rome Labs Deploys Paragon, 32 HIPPIs for COTS Radar Processing," HPCWire, Dec 4, 1995 (Item 7556).

[42] Brunett, S. and T. Gottschalk, Large-scale metacomputing framework for ModSAF: A realtime entity simulation, Center for Advanced Computing Research, California Institute of Technology (Draft copy distributed at SC97, November, 1997).

[43] Rosmond, T., Interview, Naval Research Laboratory, Monterey, July 28, 1997.

[44] Sawdey, A. C., M. T. O'Keefe, and W. B. Jones, "A general programming model for developing scalable ocean circulation applications," in ECMWF Workshop on the Use of Parallel Processors in Meteorology, Jan. 6, 1997.

[45] Drake, J., "Design and performance of a scalable parallel community climate model," Parallel Computing, Vol. 21, No. 10, 1995, pp. 1571-1591.

[46] Wehner, M. F., "Performance of a distributed memory finite difference atmospheric general circulation model," Parallel Computing, Vol. 21, No. 10, 1995, pp. 1655-1675.

[47] Balsara, J. and R. Namburu, Interview, CEWES, Aug. 21, 1997.

[48] Wallcraft, A., Interview, Naval Oceanographic Office (NAVOCEANO), Stennis Space Center, Aug. 20, 1997.

[49] Sela, J. G., "Weather forecasting on parallel architectures," Parallel Computing, Vol. 21, No. 10, 1995, pp. 1639-1654.

[50] Jann, D. E. and K. K. Droegemeier, "Proof of Concept: Operational testing of storm-scale numerical weather prediction," CAPS New Funnel, Fall, 1996, pp. 1-3 (The University of Oklahoma).

[51] Tezduyar, T. et al, "Flow simulation and high performance computing," Computational Mechanics, Vol. 18, 1996, pp. 397-412.

[52] Tezduyar, T., V. Kalro, and W. Garrard, Parallel Computational Methods for 3D Simulation of a Parafoil with Prescribed Shape Changes, Preprint 96-082, Army HPC Research Center, Minneapolis, 1996.

[53] Muzio, P. and T. E. Tezduyar, Interview, Army HPC Research Center, Aug. 4, 1997.

[54] Stein, K. et al, Parallel Finite Element Computations on the Behavior of Large Ram-Air Parachutes, Preprint 96-014, University of Minnesota Army HPC Research Center, Minneapolis, MN, 1996.



[55] Stein, K., A. A. Johnson, and T. Tezduyar, "Three-dimensional simulation of round parachutes," in Contributions to DoD Mission Success from High Performance Computing 1996, 1996, p. 41.

[56] Army High Performance Computing Research Center, Army HPC Research Center, Minneapolis, MN, 1996.

[57] Papados, P. P. and J. T. Baylot, "Structural Analysis of Hardened Structures," in Highlights in Computational Science and Engineering. Vicksburg: U.S. Army Engineer Waterways Experiment Station, 1996, pp. 64-65.

[58] Barros, S. R. M., "The IFS model: A parallel production weather code," Parallel Computing, Vol. 21, No. 10, 1995, pp. 1621-1638.

[59] Droegemeier, K. K. et al, "Weather Prediction: A Scalable Storm-Scale Model," in High performance computing: problem solving with parallel and vector architectures, Sabot, G. W., Ed. Reading, MA: Addison-Wesley Publishing Company, 1995, pp. 45-92.

[60] King, P. et al, "Airblast Effects on Concrete Buildings," in Highlights in Computational Science and Engineering. Vicksburg: U.S. Army Engineer Waterways Experiment Station, 1996, pp. 34-35.

[61] Ramamurti, R. and W. C. Sandberg, "Simulation of flow past a swimming tuna," in Contributions to DoD Mission Success from High Performance Computing 1996, 1996, p. 52.

[62] Taft, J., "Initial SGI Origin2000 Tests Show Promise for CFD Codes," NASNews, Vol. 2, No. 25, July-August, 1997 (http://www.sci.nas.nasa.gov/Pubs/NASnews/97/07/article01.html).

[63] Mercury Delivers 38 GFLOPS With 200-MHz PowerPC Processors, HPCWire, Jan 31, 1997 (Item 10694).

[64] Nielsen, D., Private Communication, Jan. 15, 1998.

[65] NCSA's Origin2000 Solves Large Aircraft Scatterer Problems, HPCWire, Jun 13, 1997 (Item 11365).

[66] Homer, D. A., Interview, CEWES, Aug. 22, 1997.



[67] Accelerated Strategic Computing Initiative: PathForward Project Description, December 27, 1996. Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, 1996.

